
    Modified Structured Domain Randomization in a Synthetic Environment for Learning Algorithms

    Deep Reinforcement Learning (DRL) has the capability to solve many complex tasks in robotics, self-driving cars, smart grids, finance, healthcare, and intelligent autonomous systems. During training, DRL agents interact freely with the environment to arrive at an inference model. Under real-world conditions this training raises safety, cost, and time concerns. Training in synthetic environments helps overcome these difficulties; however, a synthetic environment only approximates real-world conditions, resulting in a ‘reality gap’. The synthetic training of agents has proven advantageous but requires methods to bridge this reality gap. This work addressed the gap through a methodology that supports agent learning. A framework incorporating a modifiable synthetic environment integrated with an unmodified DRL algorithm was used to train, test, and evaluate agents using a modified Structured Domain Randomization (SDR+) technique. It was hypothesized that applying environment domain randomizations (DR) during the learning process would allow the agent to learn variability and adapt accordingly. Experiments using the SDR+ technique included naturalistic and physics-based DR while applying the concept of context-aware elements (CAE) to guide and speed up agent training. Drone racing served as the use case. The experimental framework workflow generated the following results. First, a baseline was established by training and validating an agent in a generic synthetic environment devoid of DR and CAE. The agent was then tested in environments with DR, which showed degraded performance. This validated the reality-gap phenomenon under synthetic conditions and established a metric for comparison. Second, an SDR+ agent was successfully trained and validated under various applications of DR and CAE. Ablation studies determined that most of the DR and CAE effects applied had comparable effects on agent performance.
Under comparison, the SDR+ agent’s performance exceeded that of the baseline agent in every test where single or combined DR effects were applied. These tests indicated that the SDR+ agent’s performance did improve in environments with applied DR of the same order as that received during training. The last result came from testing the SDR+ agent’s inference model in a completely new synthetic environment with more extreme and additional DR effects applied. The SDR+ agent’s performance degraded to a point where it was inconclusive whether generalization, in the form of learning to adapt to variations, had occurred. If the agent’s navigational capabilities, the control/feedback from the DRL algorithm, and the use of visual sensing were improved, future work could be expected to exhibit indications of generalization using the SDR+ technique.
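The per-episode domain randomization described above can be sketched in a few lines: before each training episode, visual and physical parameters of the synthetic environment are resampled so the agent experiences variability while the DRL algorithm itself stays unmodified. This is an illustrative sketch only; the parameter names and ranges are assumptions, not those used in the work.

```python
import random

def randomize_environment():
    """Resample environment parameters for one episode (illustrative
    names and ranges; the actual SDR+ parameter set is not given here)."""
    return {
        "light_intensity": random.uniform(0.5, 1.5),  # naturalistic DR
        "texture_id": random.randrange(10),           # naturalistic DR
        "wind_speed": random.uniform(0.0, 5.0),       # physics-based DR
        "gate_offset": random.uniform(-0.2, 0.2),     # context-aware element
    }

def train(num_episodes, run_episode):
    """Train with per-episode domain randomization; the DRL algorithm
    itself (run_episode) is left unmodified, as in the framework above."""
    for _ in range(num_episodes):
        run_episode(randomize_environment())
```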

    The Lyman-alpha forest at redshifts 0.1 -- 1.6: good agreement between a large hydrodynamic simulation and HST spectra

    We give a comprehensive statistical description of the Lyman-alpha absorption from the intergalactic medium in a hydrodynamic simulation at redshifts 0.1-1.6, the range of redshifts covered by HST spectra of QSOs. We use the ENZO code to make a 76 comoving Mpc cube simulation using 75 kpc cells, for a Hubble constant of 71 km/s/Mpc. The best prior work, by \citet{dave99}, used an SPH simulation in a 15.6 Mpc box with an effective resolution of 245 kpc and slightly different cosmological parameters. At redshift z=2 this simulation differs from data: \citet{tytler07b} found that the simulated spectra at z=2 have too little power on large scales, Lyman-alpha lines that are too wide, a lack of high column density lines, and a lack of pixels with low flux. Here we present statistics at z<1.6, including the flux distribution, the mean flux, the effective opacity, and the power and correlation of the flux. We also give statistics of the Lyman-alpha lines, including the line width distribution, the column density distribution, the number of lines per unit equivalent width and redshift, and the correlation between line width and column density. We find that the mean amount of absorption in the simulated spectra changes smoothly with redshift as DA(z)=0.01(1+z)^{2.25}. Both the trend and the absolute values are close to measurements of HST spectra by \citet{kirkman07a}. The column density and line width distributions are also close to those measured from HST spectra by \citet{janknecht06a}, except for the mode of the line width distribution, which is smaller in the HST spectra. Although some differences that we saw at z=2 are too subtle to be seen in existing HST spectra, overall the simulation gives a good description of HST spectra at 0.1<z<1.6.
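The quoted power-law fit for the mean absorption can be evaluated directly. A minimal sketch (the function name is ours; the coefficient and exponent are those given in the abstract):

```python
import numpy as np

def mean_absorption(z):
    """Mean Lyman-alpha absorption in the simulated spectra,
    DA(z) = 0.01 * (1+z)**2.25, the fit quoted above."""
    return 0.01 * (1.0 + np.asarray(z, dtype=float)) ** 2.25

# Evaluate over the redshift range covered by the HST spectra, 0.1 < z < 1.6.
for z in (0.1, 0.5, 1.0, 1.6):
    print(f"z = {z:.1f}: DA = {mean_absorption(z):.4f}")
```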

    The Effect of Large-Scale Power on Simulated Spectra of the Lya forest

    We study the effects of box size on ENZO simulations of the intergalactic medium (IGM) at z = 2. We follow statistics of the cold dark matter (CDM) and the Lya absorption. We find that the larger boxes have fewer pixels with significant absorption (flux < 0.96), more pixels in longer stretches with little or no absorption, and wider Lya lines. We trace these effects back to the additional power in larger boxes from longer wavelength modes. The IGM in our larger boxes is hotter, from increased pressure heating due to faster hydrodynamical infall. When we increase the photoheating in smaller boxes to compensate, their Lya statistics change to mimic those of a box twice the size. Statistics converge towards their values in the largest (76.8 Mpc) box, except for the most common value of the CDM density, which continues to rise. When we compare with the errors on data, we find that our 76.8 Mpc box is larger than we need for the mean flux, barely large enough for the column density distribution and the power spectrum of the flux, and too small for the line widths. This box with 75 kpc cells has approximately the same mean flux as QSO spectra, but the Lya lines are too wide by 2.6 km/s, there are too few lines with H I column densities > 10^17 cm^-2, and the power of the flux is too low by 20-50%, from small to large scales. Four times smaller cell size does not resolve these differences, nor do simple changes to the ultraviolet background that drives the H and He II ionization. It is hard to see how simulations using popular cosmological and astrophysical parameters can match Lyman-alpha forest data at z=2.

    Quasars near the line of sight towards Q 0302-003 and the transverse proximity effect

    We report the discovery of the faint (V=21.7) quasar QSO03027-0010 at z=2.808 in the vicinity of Q0302-003, one of the few quasars observed with STIS to study intergalactic HeII absorption. Together with another newly discovered QSO at z=2.29, there are now 6 QSOs known near the line of sight towards Q0302-003, of which 4 are located within the redshift region 2.76<=z<=3.28 covered by the STIS spectrum. We correlated the opacity variations in the HI and HeII Lyman forest spectra with the locations of known quasars. There is no significant proximity effect in the HI Lyman alpha forest for any of the QSOs, except for the well-known line-of-sight effect for Q0302-003 itself. By comparing the absorption properties in HI and HeII, we estimated the fluctuating hardness of the extragalactic UV radiation field along this line of sight. We find that close to each foreground quasar, the ionizing background is considerably harder than on average. In particular, our newly discovered QSO03027-0010 shows such a hardness increase despite being associated with an overdensity in the HI Lyman forest. We argue that the spectral hardness is a sensitive physical measure to reveal the influence of QSOs on the UV background even over scales of several Mpc, and that it breaks the density degeneracy hampering the traditional transverse proximity effect analysis. We infer from our sample that there is no need for significantly anisotropic UV radiation from the QSOs. From the transverse proximity effect detected in the sample we obtain minimum quasar lifetimes in the range ~10-30 Myr.
    Comment: 15 pages, 9 figures, accepted by A&A; problems with paper size fixed

    Predicting the Clustering of X-Ray Selected Galaxy Clusters in Flux-Limited Surveys

    (abridged) We present a model to predict the clustering properties of X-ray clusters in flux-limited surveys. Our technique correctly accounts for past light-cone effects on the observed clustering and follows the non-linear evolution in redshift of the underlying DM correlation function and cluster bias factor. The conversion of the limiting flux of a survey into the corresponding minimum mass of the hosting DM haloes is obtained by using theoretical and empirical relations between the mass, temperature and X-ray luminosity of clusters. Finally, our model is calibrated to reproduce the observed cluster counts by adopting a temperature-luminosity relation moderately evolving with redshift. We apply our technique to three existing catalogues: the BCS, XBACs and REFLEX samples. Moreover, we consider an example of possible future space missions with fainter limiting fluxes. In general, we find that the amplitude of the spatial correlation function is a decreasing function of the limiting flux and that the EdS models always give smaller correlation amplitudes than open or flat models with a low matter density parameter. In the case of XBACs, the comparison with previous estimates of the observed spatial correlation shows that only the predictions of models with Omega_0m=0.3 are in good agreement with the data, while the EdS models have too low a correlation strength. Finally, we use our technique to discuss the best strategy for future surveys. Our results show that a wide-area catalogue, even with a brighter limiting flux, is preferable to a deeper survey covering a smaller area.
    Comment: 20 pages, LaTeX using MN style, 11 figures enclosed. Version accepted for publication in MNRAS

    Numerical simulations of the Warm-Hot Intergalactic Medium

    In this paper we review the current predictions of numerical simulations for the origin and observability of the warm-hot intergalactic medium (WHIM), the diffuse gas that contains up to 50 per cent of the baryons at z~0. During structure formation, gravitational accretion shocks emerging from collapsing regions gradually heat the intergalactic medium (IGM) to temperatures in the range T~10^5-10^7 K. The WHIM is predicted to radiate most of its energy in the ultraviolet (UV) and X-ray bands and to contribute a significant fraction of the soft X-ray background emission. While O VI and C IV absorption systems arising in the cooler fraction of the WHIM, with T~10^5-10^5.5 K, are seen in FUSE and HST observations, models agree that current X-ray telescopes such as Chandra and XMM-Newton do not have enough sensitivity to detect the hotter WHIM. However, future missions such as Constellation-X and XEUS might be able to detect both emission lines and absorption systems from highly ionised atoms such as O VII, O VIII and Fe XVII.
    Comment: 18 pages, 5 figures, accepted for publication in Space Science Reviews, special issue "Clusters of galaxies: beyond the thermal view", Editor J.S. Kaastra, Chapter 14; work done by an international team at the International Space Science Institute (ISSI), Bern, organised by J.S. Kaastra, A.M. Bykov, S. Schindler & J.A.M. Bleeker

    Evolution at z>0.5 of the X-ray properties of simulated galaxy clusters: comparison with the observational constraints

    (ABRIDGED) We analyze the X-ray properties of a sample of local and high-redshift galaxy clusters extracted from a large cosmological hydrodynamical simulation. This simulation was realized using the Tree+SPH code GADGET-2 for a LambdaCDM model. In our analysis, we consider only objects with T_ew > 2 keV and adopt an approach that mimics observations, associating with each measurement an error comparable to recent observations and providing best-fit results via robust techniques. Within the clusters, baryons are distributed among (i) a cold neutral phase, with a relative contribution that increases from less than 1 to 3 per cent at higher redshift, (ii) stars, which contribute about 20 per cent, and (iii) the X-ray emitting plasma, which contributes 80 (76) per cent at z=0 (1) to the total baryonic budget. A depletion of the cosmic baryon fraction of ~7 (at z=0) and 5 (at z=1) per cent is measured at the virial radius, R_vir, in good agreement with adiabatic hydrodynamical simulations. We confirm that, also at redshift >0.5, power-law relations hold between gas temperature, T, bolometric luminosity, L, central entropy, S, gas mass, M_gas, and total gravitating mass, M_tot, and that these relations are steeper than predicted by simple gravitational collapse. A significant negative evolution in the L-T and L-M_tot relations and a positive evolution in the S-T relation are detected at 0.5 < z < 1 in this set of simulated galaxy clusters. This is partially consistent with recent analyses of the observed properties of z>0.5 X-ray galaxy clusters. By fixing the slopes to the values predicted by simple gravitational collapse, we measure at high redshift normalizations lower by 10-40 per cent in the L-T, M_tot-T, M_gas-T, f_gas-T and L-M_tot relations than the observed estimates.
    Comment: 13 pages, MNRAS in press

    Inflation, cold dark matter, and the central density problem

    A problem with high central densities in dark halos has arisen in the context of LCDM cosmologies with scale-invariant initial power spectra. Although n=1 is often justified by appealing to the inflation scenario, inflationary models with mild deviations from scale invariance are not uncommon, and models with significant running of the spectral index are plausible. Even mild deviations from scale invariance can be important because halo collapse times and densities depend on the relative amount of small-scale power. We choose several popular models of inflation and work out the ramifications for galaxy central densities. For each model, we calculate its COBE-normalized power spectrum and deduce the implied halo densities using a semi-analytic method calibrated against N-body simulations. We compare our predictions to a sample of dark matter-dominated galaxies using a non-parametric measure of the density. While standard n=1 LCDM halos are overdense by a factor of 6, several of our example inflation+CDM models predict halo densities well within the range preferred by observations. We also show how the presence of massive (0.5 eV) neutrinos may help to alleviate the central density problem even with n=1. We conclude that galaxy central densities may not be as problematic for the CDM paradigm as is sometimes assumed: rather than telling us something about the nature of the dark matter, galaxy rotation curves may be telling us something about inflation and/or neutrinos. An important test of this idea will be an eventual consensus on the value of sigma_8, the rms overdensity on the scale 8 h^-1 Mpc. Our successful models have values of sigma_8 of approximately 0.75, which is within the range of recent determinations. Finally, models with n>1 (or sigma_8 > 1) are highly disfavored.
    Comment: 13 pages, 6 figures. Minor changes made to reflect referee's comments, error in Eq. (18) corrected, references updated and corrected, conclusions unchanged. Version accepted for publication in Phys. Rev. D, scheduled for 15 August 200

    Information field theory for cosmological perturbation reconstruction and non-linear signal analysis

    We develop information field theory (IFT) as a means of Bayesian inference on spatially distributed signals, the information fields. A didactical approach is attempted. Starting from general considerations on the nature of measurements, signals, noise, and their relation to a physical reality, we derive the information Hamiltonian, the source field, propagator, and interaction terms. Free IFT reproduces the well-known Wiener-filter theory. Interacting IFT can be expanded diagrammatically, for which we provide the Feynman rules in position, Fourier, and spherical harmonics space, and the Boltzmann-Shannon information measure. The theory should be applicable in many fields; here, two cosmological signal recovery problems are discussed in their IFT formulation. 1) Reconstruction of the cosmic large-scale structure matter distribution from discrete galaxy counts in incomplete galaxy surveys, within a simple model of galaxy formation. We show that a Gaussian signal, which should resemble the initial density perturbations of the Universe, observed with a strongly non-linear, incomplete and Poissonian-noise affected response, as the processes of structure formation, galaxy formation, and observation provide, can be reconstructed thanks to the virtue of a response-renormalization flow equation. 2) We design a filter to detect local non-linearities in the cosmic microwave background, which are predicted by some early-Universe inflationary scenarios and expected due to measurement imperfections. This filter is the optimal Bayes estimator up to linear order in the non-linearity parameter and can be used even to construct sky maps of non-linearities in the data.
    Comment: 38 pages, 6 figures, LaTeX; version accepted by PR
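The statement that free IFT reproduces Wiener-filter theory can be illustrated with a toy one-dimensional reconstruction. The sketch below assumes a known signal power spectrum S and white-noise power N (both invented for the demonstration) and applies the standard Fourier-space Wiener filter m = S/(S+N) d:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256

# Assumed toy spectra: a red signal power spectrum and white noise.
k = np.fft.fftfreq(n)
S = 1.0 / (1.0 + (k / 0.05) ** 2)   # signal power spectrum (assumption)
N = np.full(n, 0.1)                 # white-noise power (assumption)

# Draw a signal realization with power ~S, then add white noise.
signal = np.fft.ifft(np.sqrt(S) * np.fft.fft(rng.standard_normal(n))).real
data = signal + np.sqrt(0.1) * rng.standard_normal(n)

# Wiener filter applied mode by mode in Fourier space: m = S/(S+N) * d.
m = np.fft.ifft(S / (S + N) * np.fft.fft(data)).real

# The filtered reconstruction should be closer to the signal than raw data.
err_raw = np.mean((data - signal) ** 2)
err_wf = np.mean((m - signal) ** 2)
```

In IFT language this filtered map is the free-theory posterior mean; the diagrammatic expansion supplies corrections when the response or noise is non-linear or non-Gaussian, as in the two applications above.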

    A networked system for monitoring the technological process of growing semiconductor crystals and thin films

    Experimental modeling of the hardware and software showed sufficient operational reliability of the system and a significant reduction in the labor intensity of monitoring and controlling the parameters of the technological process